Anchors: High-Precision Model-Agnostic Explanations
Authors
Abstract
We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different models for different domains and tasks. In a user study, we show that anchors enable users to predict how a model would behave on unseen instances with less effort and higher precision, as compared to existing linear explanations or no explanations.
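An anchor is an if-then rule that fixes a subset of an instance's features such that, with high probability, perturbing the remaining features leaves the model's prediction unchanged. As a rough illustration of that idea only (not the paper's search procedure, which provides the high-probability guarantees mentioned above), the sketch below greedily grows an anchor for a tabular instance; the perturbation scheme, the precision threshold, and every helper name are assumptions made for illustration.

```python
import numpy as np

def estimate_precision(model, x, anchor, X_background, n_samples=1000, rng=None):
    """Estimate P(model(z) == model(x)) over perturbations z that keep the
    features listed in `anchor` fixed at x's values and resample the rest
    from rows of X_background (an assumed perturbation distribution)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    target = model.predict(x.reshape(1, -1))[0]
    rows = rng.integers(0, len(X_background), size=n_samples)
    Z = X_background[rows].copy()
    fixed = list(anchor)
    Z[:, fixed] = x[fixed]                      # clamp the anchored features
    return float(np.mean(model.predict(Z) == target))

def greedy_anchor(model, x, X_background, precision_target=0.95, n_samples=1000):
    """Greedily add whichever feature most increases estimated precision until
    the target is reached -- a simplification of the anchors search, which
    instead uses a bandit-based procedure with statistical guarantees."""
    anchor, candidates = set(), set(range(x.shape[0]))
    while candidates:
        if anchor and estimate_precision(model, x, anchor, X_background, n_samples) >= precision_target:
            break
        best = max(candidates,
                   key=lambda f: estimate_precision(model, x, anchor | {f}, X_background, n_samples))
        anchor.add(best)
        candidates.remove(best)
    return anchor, estimate_precision(model, x, anchor, X_background, n_samples)
```

With a scikit-learn-style classifier `clf` and training matrix `X_train`, calling `greedy_anchor(clf, X_test[0], X_train)` would return the indices of the fixed features and the estimated precision of the resulting rule; the coverage of the rule would then be the fraction of instances that satisfy it.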
Similar references
Machine Learning Model Interpretability for Precision Medicine
Interpretability of machine learning models is critical for data-driven precision medicine efforts. However, highly predictive models are generally complex and difficult to interpret. Here, using the Model-Agnostic Explanations algorithm, we show that complex models such as random forests can be made interpretable. Using the MIMIC-II dataset, we successfully predicted ICU mortality with 80% balanced ...
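If the model-agnostic algorithm referred to here is LIME-style local surrogate modelling, pairing it with a random-forest mortality model might look roughly like the sketch below. The synthetic data and feature names are placeholders standing in for MIMIC-II variables, and the use of the `lime` package is an assumption about tooling, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # assumes the `lime` package is installed

# Placeholder data standing in for ICU-style features (not the study's real variables).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)
feature_names = ["age", "heart_rate", "lactate", "creatinine"]

# Complex black-box model to be explained locally.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["survived", "died"],
    mode="classification",
)
exp = explainer.explain_instance(X_train[0], rf.predict_proba, num_features=4)
print(exp.as_list())  # per-feature weights of the local linear surrogate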
Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate humans are in those predictions, and e...
Local Interpretable Model-Agnostic Explanations for Music Content Analysis
The interpretability of a machine learning model is essential for gaining insight into model behaviour. While some machine learning models (e.g., decision trees) are transparent, the majority of models used today are still black-boxes. Recent work in machine learning aims to analyse these models by explaining the basis of their decisions. In this work, we extend one such technique, called local...
Explain Yourself: A Natural Language Interface for Scrutable Autonomous Robots
Autonomous systems in remote locations operate with a high degree of autonomy, and there is a need to explain what they are doing and why in order to increase transparency and maintain trust. Here, we describe a natural language chat interface that enables vehicle behaviour to be queried by the user. We obtain an interpretable model of autonomy by having an expert ‘speak out loud’ and provide expla...
Explanations of model predictions with live and breakDown packages
Complex models are commonly used in predictive modeling. In this paper we present R packages that can be used to explain predictions from complex black box models and attribute parts of these predictions to input features. We introduce two new approaches and corresponding packages for such attribution, namely live and breakDown. We also compare their results with existing implementations of sta...
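The breakDown idea attributes a single prediction to input features by fixing them one at a time to the instance's values and crediting each feature with the resulting change in the model's average prediction. The packages themselves are written in R; the sketch below is a minimal Python analogue under that reading, with illustrative function and variable names only.

```python
import numpy as np

def breakdown_attributions(model, x, X_background, order=None):
    """Sequentially clamp features of `x` and attribute the change in the
    model's mean predicted probability to the feature just clamped
    (a simplified analogue of the breakDown decomposition)."""
    n_features = x.shape[0]
    order = list(order) if order is not None else list(range(n_features))
    Z = X_background.copy()
    baseline = model.predict_proba(Z)[:, 1].mean()   # mean prediction with nothing fixed
    contributions, prev = {}, baseline
    for f in order:
        Z[:, f] = x[f]                               # fix feature f to the instance's value
        current = model.predict_proba(Z)[:, 1].mean()
        contributions[f] = current - prev            # credit the change to feature f
        prev = current
    return baseline, contributions
```

Because the contributions are computed sequentially, they sum to the difference between the instance's prediction and the baseline mean prediction, but their individual values depend on the chosen feature order, which is why ordering strategies matter in this style of attribution.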